neural network certification
Reviews: Beyond the Single Neuron Convex Barrier for Neural Network Certification
Originality: To the best of my knowledge, the authors propose a novel relaxation for networks with ReLU activations that tightens previously proposed relaxations, which ignore the correlations between neurons in the network. The theoretical results are also novel (although unsurprising). However, it would be useful for the authors to better clarify the computational requirements and tightness of k-ReLU relative to DeepPoly and other similar relaxations and bound propagation methods like [13] and https://arxiv.org/abs/1805.12514.
Quality: The theoretical results are accurate (albeit unsurprising) in my opinion. The experimental section is missing several important details: 1) The authors say that experiments are performed on both MNIST and CIFAR-10, but Tables 2 and 3 only report numbers on MNIST.
Reviews: Beyond the Single Neuron Convex Barrier for Neural Network Certification
All reviewers were leaning towards acceptance. Unfortunately, in the discussion after the rebuttal it became clear that crucial parts of the paper could not be properly understood, e.g., the set S in line 147 is a union of polyhedra whereas it seems that it should be an intersection. Moreover, the notation introduced by the authors (box-cap to mean convex hull) was not helpful either. The evaluation is not very informative, as the authors evaluate mainly on non-robust models; the gain on the only robust model (ConvBig) on MNIST is marginal, and the same is true for CIFAR-10. It is thus hard to judge how significant the impact of the improved relaxation is for the verification of robust models. On the other hand, the reviewers appreciated the idea of the k-ReLU relaxation, as it can also be used in other verification frameworks.
Beyond the Single Neuron Convex Barrier for Neural Network Certification
Singh, Gagandeep, Ganvir, Rupanshu, Püschel, Markus, Vechev, Martin
We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks. The key idea is to approximate the output of multiple ReLUs in a layer jointly instead of separately. This joint relaxation captures dependencies between the inputs to different ReLUs in a layer and thus overcomes the convex barrier imposed by the single-neuron triangle relaxation and its approximations. The framework is parametric in the number k of ReLUs it considers jointly and can be combined with existing verifiers to improve their precision. Our experimental results show that k-ReLU enables significantly more precise certification than existing state-of-the-art verifiers while maintaining scalability.
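To make the gap between separate and joint relaxations concrete, here is a minimal, deliberately extreme toy sketch (not the paper's k-ReLU algorithm): two pre-activations forced to be equal, x1 = x2 in [-1, 1], each followed by y_i = ReLU(x_i). An LP over the standard single-neuron triangle relaxation overestimates max(y1 - y2) as 0.5, while a joint relaxation that keeps the correlation recovers the exact value 0. The variable layout and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Toy comparison: separate (single-neuron triangle) vs. joint ReLU relaxation.
import numpy as np
from scipy.optimize import linprog

# Variable order: [x1, x2, y1, y2]; maximize y1 - y2 <=> minimize -y1 + y2.
c = np.array([0.0, 0.0, -1.0, 1.0])
bounds = [(-1, 1), (-1, 1), (None, None), (None, None)]

# Triangle relaxation of y = ReLU(x) for x in [l, u] = [-1, 1]:
#   y >= 0, y >= x, y <= (x - l) * u / (u - l) = (x + 1) / 2
A_ub = np.array([
    [0.0, 0.0, -1.0, 0.0],    # -y1          <= 0
    [1.0, 0.0, -1.0, 0.0],    #  x1 - y1     <= 0
    [-0.5, 0.0, 1.0, 0.0],    #  y1 - x1/2   <= 1/2
    [0.0, 0.0, 0.0, -1.0],    # -y2          <= 0
    [0.0, 1.0, 0.0, -1.0],    #  x2 - y2     <= 0
    [0.0, -0.5, 0.0, 1.0],    #  y2 - x2/2   <= 1/2
])
b_ub = np.array([0.0, 0.0, 0.5, 0.0, 0.0, 0.5])

# Input correlation: x1 = x2.
A_eq = np.array([[1.0, -1.0, 0.0, 0.0]])
b_eq = np.array([0.0])

sep = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("separate triangle relaxation: max(y1 - y2) =", -sep.fun)  # 0.5

# Joint relaxation: the convex hull of {(x, ReLU(x), ReLU(x)) : x in [-1, 1]}
# additionally satisfies y1 = y2, which the separate relaxation cannot express.
A_eq2 = np.vstack([A_eq, [[0.0, 0.0, 1.0, -1.0]]])
b_eq2 = np.array([0.0, 0.0])
joint = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq2, b_eq=b_eq2, bounds=bounds)
print("joint relaxation:             max(y1 - y2) =", -joint.fun)  # 0.0 (exact)
```

The point of the toy case: the separate relaxation treats (x1, y1) and (x2, y2) independently and therefore admits the spurious vertex (y1, y2) = (0.5, 0), whereas any relaxation of the joint graph must respect y1 = y2.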
Riemannian data-dependent randomized smoothing for neural networks certification
Labarbarie, Pol, Hajri, Hatem, Arnaudon, Marc
Certification of neural networks is an important and challenging problem that has been attracting the attention of the machine learning community for several years. In this paper, we focus on randomized smoothing (RS), which is considered the state-of-the-art method to obtain certifiably robust neural networks. In particular, ANCER, a recently introduced data-dependent RS technique, can be used to certify ellipses with orthogonal axes near each input data point of the neural network. In this work, we remark that ANCER is not invariant under rotation of the input data and propose a new rotationally invariant formulation of it which can certify ellipses without constraints on their axes. Our approach, called Riemannian Data-Dependent Randomized Smoothing (RDDRS), relies on information geometry techniques on the manifold of covariance matrices and, in our experiments on the MNIST dataset, can certify larger regions than ANCER.
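For context, below is a minimal sketch of the isotropic randomized-smoothing certificate of Cohen et al. (2019) that ANCER and RDDRS generalize. The model handle, noise level, and sample count are illustrative assumptions, and a rigorous certificate would replace the raw empirical frequency with a one-sided confidence lower bound (e.g., Clopper-Pearson), as in the original method.

```python
# Minimal sketch of isotropic randomized smoothing: with Gaussian noise
# N(0, sigma^2 I), if the smoothed classifier's top-class probability is at
# least p > 1/2, the prediction is certifiably constant within an l2 ball of
# radius sigma * Phi^{-1}(p).
import torch
from scipy.stats import norm

def certified_l2_radius(model, x, sigma=0.5, n_samples=1000):
    """Crude Monte Carlo estimate of the certified l2 radius at input x.

    Illustrative only: a sound certificate would use a confidence lower
    bound on the top-class probability instead of the plain frequency.
    """
    with torch.no_grad():
        noise = sigma * torch.randn(n_samples, *x.shape)
        preds = model(x.unsqueeze(0) + noise).argmax(dim=1)
    top_class = preds.mode().values.item()
    p_hat = (preds == top_class).float().mean().item()
    if p_hat <= 0.5:
        return top_class, 0.0  # abstain: no certificate
    p_hat = min(p_hat, 1.0 - 1e-6)  # avoid an infinite radius at p_hat = 1
    return top_class, sigma * norm.ppf(p_hat)
```

In these terms, ANCER replaces the scalar sigma with a per-input diagonal covariance, turning the certified l2 ball into an axis-aligned ellipsoid, while RDDRS drops the diagonality constraint by working on the manifold of full covariance matrices, which is what makes the certificate invariant under rotations of the input data.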